Results 1 - 20 of 108
1.
PLoS Comput Biol ; 20(1): e1011783, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38206969

ABSTRACT

Neurons throughout the brain modulate their firing rate lawfully in response to sensory input. Theories of neural computation posit that these modulations reflect the outcome of a constrained optimization in which neurons aim to robustly and efficiently represent sensory information. Our understanding of how this optimization varies across different areas in the brain, however, is still in its infancy. Here, we show that neural sensory responses transform along the dorsal stream of the visual system in a manner consistent with a transition from optimizing for information preservation towards optimizing for perceptual discrimination. Focusing on the representation of binocular disparities (the slight differences in the retinal images of the two eyes), we re-analyze measurements characterizing neuronal tuning curves in brain areas V1, V2, and MT (middle temporal) in the macaque monkey. We compare these to measurements of the statistics of binocular disparity typically encountered during natural behaviors using a Fisher Information framework. The differences in tuning curve characteristics across areas are consistent with a shift in optimization goals: V1 and V2 population-level responses are more consistent with maximizing the information encoded about naturally occurring binocular disparities, while MT responses shift towards maximizing the ability to support disparity discrimination. We find that a change towards tuning curves preferring larger disparities is a key driver of this shift. These results provide new insight into previously-identified differences between disparity-selective areas of cortex and suggest these differences play an important role in supporting visually-guided behavior. Our findings emphasize the need to consider not just information preservation and neural resources, but also relevance to behavior, when assessing the optimality of neural codes.


Subjects
Visual Cortex, Animals, Visual Cortex/physiology, Macaca, Vision Disparity, Brain, Neurons/physiology, Photic Stimulation/methods
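The Fisher information comparison described in this abstract can be illustrated numerically. The sketch below is not the paper's analysis; it assumes independent Poisson spiking and Gaussian tuning curves with made-up parameters, and shows that a population whose tuning curves cluster around small disparities carries more Fisher information about small disparities than about large ones.

```python
import numpy as np

def population_fisher(s, centers, width=0.1, gain=20.0, baseline=0.5):
    """Fisher information of a population of Gaussian tuning curves under
    independent Poisson spiking: I(s) = sum_n f_n'(s)^2 / f_n(s).
    All parameters are illustrative, not fitted to data."""
    g = gain * np.exp(-0.5 * ((s - centers) / width) ** 2)
    f = baseline + g                       # mean firing rate of each neuron
    df = -g * (s - centers) / width ** 2   # derivative of each tuning curve
    return float(np.sum(df ** 2 / f))

# Preferred disparities (deg): dense coverage of small disparities,
# sparse coverage of large ones
centers = np.concatenate([np.linspace(-0.5, 0.5, 30), np.linspace(-2.0, 2.0, 10)])

# More information is available where tuning curves are densely packed
print(population_fisher(0.0, centers) > population_fisher(1.5, centers))  # True
```

Shifting `centers` toward larger disparities, as the abstract describes for MT, moves the information toward larger disparity values in the same way.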
2.
bioRxiv ; 2023 Nov 18.
Article in English | MEDLINE | ID: mdl-38014023

ABSTRACT

Since motion can only be defined relative to a reference frame, which reference frame guides perception? A century of psychophysical studies has produced conflicting evidence: retinotopic, egocentric, world-centric, or even object-centric. We introduce a hierarchical Bayesian model mapping retinal velocities to perceived velocities. Our model mirrors the structure in the world, in which visual elements move within causally connected reference frames. Friction renders velocities in these reference frames mostly stationary, formalized by an additional delta component (at zero) in the prior. Inverting this model automatically segments visual inputs into groups, groups into supergroups, and so on, and "perceives" motion in the appropriate reference frame. Critical model predictions are supported by two new experiments, and fitting our model to the data allows us to infer the subjective set of reference frames used by individual observers. Our model provides a quantitative normative justification for key Gestalt principles, providing inspiration for building better models of visual processing in general.
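The "additional delta component (at zero) in the prior" is a spike-and-slab construction. As a hedged one-dimensional sketch (this is not the paper's hierarchical model, and all parameter values are invented), the posterior probability that a reference frame is moving, given a single noisy velocity observation, can be computed by comparing the spike and slab likelihoods:

```python
import math

def posterior_moving(v_obs, sigma_obs=1.0, sigma_slab=5.0, p_move=0.3):
    """Posterior probability that a reference frame is moving, under a
    spike-and-slab prior P(v) = (1 - p_move)*delta(v) + p_move*N(0, sigma_slab^2)
    and an observation model v_obs ~ N(v, sigma_obs^2)."""
    # likelihood if truly stationary (spike at v = 0)
    like_still = math.exp(-0.5 * (v_obs / sigma_obs) ** 2) / (sigma_obs * math.sqrt(2 * math.pi))
    # likelihood if moving: marginalizing over v gives N(0, sigma_obs^2 + sigma_slab^2)
    s2 = sigma_obs ** 2 + sigma_slab ** 2
    like_move = math.exp(-0.5 * v_obs ** 2 / s2) / math.sqrt(2 * math.pi * s2)
    num = p_move * like_move
    return num / (num + (1 - p_move) * like_still)

# Small observed velocities are absorbed by the stationary spike;
# large ones are attributed to genuine motion.
print(posterior_moving(0.2) < 0.5 < posterior_moving(4.0))  # True
```

The spike is what makes mostly-stationary frames the default explanation, which is the friction argument in the abstract.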

3.
Philos Trans R Soc Lond B Biol Sci ; 378(1886): 20220344, 2023 09 25.
Article in English | MEDLINE | ID: mdl-37545300

ABSTRACT

A key computation in building adaptive internal models of the external world is to ascribe sensory signals to their likely cause(s), a process of causal inference (CI). CI is well studied within the framework of two-alternative forced-choice tasks, but less well understood within the cadre of naturalistic action-perception loops. Here, we examine the process of disambiguating retinal motion caused by self- and/or object-motion during closed-loop navigation. First, we derive a normative account specifying how observers ought to intercept hidden and moving targets given their belief about (i) whether retinal motion was caused by the target moving, and (ii) if so, with what velocity. Next, in line with the modelling results, we show that humans report targets as stationary and steer towards their initial rather than final position more often when they are themselves moving, suggesting a putative misattribution of object-motion to the self. Further, we predict that observers should misattribute retinal motion more often: (i) during passive rather than active self-motion (given the lack of an efference copy informing self-motion estimates in the former), and (ii) when targets are presented eccentrically rather than centrally (given that lateral self-motion flow vectors are larger at eccentric locations during forward self-motion). Results support both of these predictions. Lastly, analysis of eye movements shows that, while initial saccades toward targets were largely accurate regardless of the self-motion condition, subsequent gaze pursuit was modulated by target velocity during object-only motion, but not during concurrent object- and self-motion. These results demonstrate CI within action-perception loops, and suggest a protracted temporal unfolding of the computations characterizing CI. This article is part of the theme issue 'Decision and control processes in multisensory perception'.


Subjects
Motion Perception, Humans, Eye Movements, Motion (Physics), Saccades, Orientation, Photic Stimulation
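Prediction (i), that passive self-motion should increase misattribution, follows from a simple causal-inference calculation. The sketch below is a minimal illustration of that logic, not the paper's normative model; the generative assumptions and all numbers are invented. It compares the posterior probability that the target is stationary in the world when the observer's self-motion estimate is precise (active, efference copy available) versus noisy (passive):

```python
import math

def p_target_stationary(v_retina, v_self_est, sigma_self, sigma_obs=0.5,
                        p_stationary=0.5, sigma_move=4.0):
    """Posterior that a target is stationary in the world, given retinal
    motion and a noisy estimate of self-motion. Generative sketch:
    v_retina = v_world - v_self + noise; sigma_self is larger during
    passive self-motion, where no efference copy is available."""
    resid = v_retina + v_self_est          # near zero if the target is stationary
    s_still = sigma_obs ** 2 + sigma_self ** 2
    s_move = s_still + sigma_move ** 2     # v_world ~ N(0, sigma_move^2) if moving
    like_still = math.exp(-0.5 * resid ** 2 / s_still) / math.sqrt(2 * math.pi * s_still)
    like_move = math.exp(-0.5 * resid ** 2 / s_move) / math.sqrt(2 * math.pi * s_move)
    num = p_stationary * like_still
    return num / (num + (1 - p_stationary) * like_move)

# Identical retinal input, but noisier self-motion knowledge in the passive case
active = p_target_stationary(v_retina=1.0, v_self_est=-2.0, sigma_self=0.3)
passive = p_target_stationary(v_retina=1.0, v_self_est=-2.0, sigma_self=2.0)
print(passive > active)  # True: more motion is attributed to the self when passive
```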
4.
bioRxiv ; 2023 Mar 25.
Article in English | MEDLINE | ID: mdl-36993305

ABSTRACT

Neurons throughout the brain modulate their firing rate lawfully in response to changes in sensory input. Theories of neural computation posit that these modulations reflect the outcome of a constrained optimization: neurons aim to efficiently and robustly represent sensory information under resource limitations. Our understanding of how this optimization varies across the brain, however, is still in its infancy. Here, we show that neural responses transform along the dorsal stream of the visual system in a manner consistent with a transition from optimizing for information preservation to optimizing for perceptual discrimination. Focusing on binocular disparity (the slight differences in how objects project to the two eyes), we re-analyze measurements characterizing neuronal tuning curves in macaque monkey brain regions V1, V2, and MT, and compare these to measurements of the natural visual statistics of binocular disparity. The changes in tuning curve characteristics are computationally consistent with a shift in optimization goals from maximizing the information encoded about naturally occurring binocular disparities to maximizing the ability to support fine disparity discrimination. We find that a change towards tuning curves preferring larger disparities is a key driver of this shift. These results provide new insight into previously-identified differences between disparity-selective regions of cortex and suggest these differences play an important role in supporting visually-guided behavior. Our findings support a key re-framing of optimal coding in regions of the brain that contain sensory information, emphasizing the need to consider not just information preservation and neural resources, but also relevance to behavior.

5.
J Neurosci ; 43(11): 1888-1904, 2023 03 15.
Article in English | MEDLINE | ID: mdl-36725323

ABSTRACT

Smooth eye movements are common during natural viewing; we frequently rotate our eyes to track moving objects or to maintain fixation on an object during self-movement. Reliable information about smooth eye movements is crucial to various neural computations, such as estimating heading from optic flow or judging depth from motion parallax. While it is well established that extraretinal signals (e.g., efference copies of motor commands) carry critical information about eye velocity, the rotational optic flow field produced by eye rotations also carries valuable information. Although previous work has shown that dynamic perspective cues in optic flow can be used in computations that require estimates of eye velocity, it has remained unclear where and how the brain processes these visual cues and how they are integrated with extraretinal signals regarding eye rotation. We examined how neurons in the dorsal region of the medial superior temporal area (MSTd) of two male rhesus monkeys represent the direction of smooth pursuit eye movements based on both visual cues (dynamic perspective) and extraretinal signals. We find that most MSTd neurons have matched preferences for the direction of eye rotation based on visual and extraretinal signals. Moreover, neural responses to combinations of these signals are well predicted by a weighted linear summation model. These findings demonstrate a neural substrate for representing the velocity of smooth eye movements based on rotational optic flow and establish area MSTd as a key node for integrating visual and extraretinal signals into a more generalized representation of smooth eye movements.

SIGNIFICANCE STATEMENT
We frequently rotate our eyes to smoothly track objects of interest during self-motion. Information about eye velocity is crucial for a variety of computations performed by the brain, including depth perception and heading perception. Traditionally, information about eye rotation has been thought to arise mainly from extraretinal signals, such as efference copies of motor commands. Previous work shows that eye velocity can also be inferred from rotational optic flow that accompanies smooth eye movements, but the neural origins of these visual signals about eye rotation have remained unknown. We demonstrate that macaque neurons signal the direction of smooth eye rotation based on visual signals, and that they integrate both visual and extraretinal signals regarding eye rotation in a congruent fashion.


Subjects
Motion Perception, Optic Flow, Animals, Male, Eye Movements, Cues (Psychology), Smooth Pursuit, Neurons/physiology, Macaca mulatta, Motion Perception/physiology, Photic Stimulation
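The weighted linear summation model mentioned in this abstract can be sketched as a least-squares regression of the combined-cue response on the two single-cue responses. The neuron, its tuning, and the weights below are all hypothetical, chosen only to show the fitting step:

```python
import numpy as np

# Hypothetical direction tuning (8 pursuit directions) of one MSTd-like
# neuron, measured separately for visual (dynamic perspective) and
# extraretinal conditions
dirs = np.deg2rad(np.arange(0, 360, 45))
r_visual = 10 + 8 * np.cos(dirs)
r_extra = 12 + 6 * np.cos(dirs - 0.2)   # congruent preference, slightly offset

# Simulate a combined-cue response generated by weighted linear summation
rng = np.random.default_rng(0)
r_comb = 0.6 * r_visual + 0.5 * r_extra + rng.normal(0, 0.2, dirs.size)

# Recover the weights: r_comb ~ w_v * r_visual + w_e * r_extra
X = np.column_stack([r_visual, r_extra])
w, *_ = np.linalg.lstsq(X, r_comb, rcond=None)
print(np.round(w, 2))  # close to the generating weights (0.6, 0.5)
```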
6.
bioRxiv ; 2023 Jan 30.
Article in English | MEDLINE | ID: mdl-36778376

ABSTRACT

A key computation in building adaptive internal models of the external world is to ascribe sensory signals to their likely cause(s), a process of Bayesian Causal Inference (CI). CI is well studied within the framework of two-alternative forced-choice tasks, but less well understood within the cadre of naturalistic action-perception loops. Here, we examine the process of disambiguating retinal motion caused by self- and/or object-motion during closed-loop navigation. First, we derive a normative account specifying how observers ought to intercept hidden and moving targets given their belief over (i) whether retinal motion was caused by the target moving, and (ii) if so, with what velocity. Next, in line with the modeling results, we show that humans report targets as stationary and steer toward their initial rather than final position more often when they are themselves moving, suggesting a misattribution of object-motion to the self. Further, we predict that observers should misattribute retinal motion more often: (i) during passive rather than active self-motion (given the lack of an efference copy informing self-motion estimates in the former), and (ii) when targets are presented eccentrically rather than centrally (given that lateral self-motion flow vectors are larger at eccentric locations during forward self-motion). Results confirm both of these predictions. Lastly, analysis of eye movements shows that, while initial saccades toward targets are largely accurate regardless of the self-motion condition, subsequent gaze pursuit was modulated by target velocity during object-only motion, but not during concurrent object- and self-motion. These results demonstrate CI within action-perception loops, and suggest a protracted temporal unfolding of the computations characterizing CI.

7.
Sci Rep ; 12(1): 18480, 2022 11 02.
Article in English | MEDLINE | ID: mdl-36323845

ABSTRACT

An important function of the visual system is to represent 3D scene structure from a sequence of 2D images projected onto the retinae. During observer translation, the relative image motion of stationary objects at different distances (motion parallax) provides potent depth information. However, if an object moves relative to the scene, this complicates the computation of depth from motion parallax since there will be an additional component of image motion related to scene-relative object motion. To correctly compute depth from motion parallax, only the component of image motion caused by self-motion should be used by the brain. Previous experimental and theoretical work on perception of depth from motion parallax has assumed that objects are stationary in the world. Thus, it is unknown whether perceived depth based on motion parallax is biased by object motion relative to the scene. Naïve human subjects viewed a virtual 3D scene consisting of a ground plane and stationary background objects, while lateral self-motion was simulated by optic flow. A target object could be either stationary or moving laterally at different velocities, and subjects were asked to judge the depth of the object relative to the plane of fixation. Subjects showed a far bias when object and observer moved in the same direction, and a near bias when object and observer moved in opposite directions. This pattern of biases is expected if subjects confound image motion due to self-motion with that due to scene-relative object motion. These biases were large when the object was viewed monocularly, and were greatly reduced, but not eliminated, when binocular disparity cues were provided. Our findings establish that scene-relative object motion can confound perceptual judgements of depth during self-motion.


Subjects
Motion Perception, Optic Flow, Humans, Vision Disparity, Motion (Physics), Cues (Psychology), Bias, Depth Perception
8.
Elife ; 11, 2022 06 01.
Article in English | MEDLINE | ID: mdl-35642599

ABSTRACT

Detection of objects that move in a scene is a fundamental computation performed by the visual system. This computation is greatly complicated by observer motion, which causes most objects to move across the retinal image. How the visual system detects scene-relative object motion during self-motion is poorly understood. Human behavioral studies suggest that the visual system may identify local conflicts between motion parallax and binocular disparity cues to depth and may use these signals to detect moving objects. We describe a novel mechanism for performing this computation based on neurons in macaque middle temporal (MT) area with incongruent depth tuning for binocular disparity and motion parallax cues. Neurons with incongruent tuning respond selectively to scene-relative object motion, and their responses are predictive of perceptual decisions when animals are trained to detect a moving object during self-motion. This finding establishes a novel functional role for neurons with incongruent tuning for multiple depth cues.


Subjects
Motion Perception, Animals, Cues (Psychology), Motion (Physics), Motion Perception/physiology, Temporal Lobe/physiology, Vision Disparity
9.
J Neurosci ; 42(7): 1235-1253, 2022 02 16.
Article in English | MEDLINE | ID: mdl-34911796

ABSTRACT

There are two distinct sources of retinal image motion: objects moving in the world and observer movement. When the eyes move to track a target of interest, the retinal velocity of some object in the scene will depend on both eye velocity and that object's motion in the world. Thus, to compute the object's velocity relative to the head, a coordinate transformation must be performed by vectorially adding eye velocity and retinal velocity. In contrast, a very different interaction between retinal and eye velocity signals has been proposed to underlie estimation of depth from motion parallax, which involves computing the ratio of retinal and eye velocities. We examined how neurons in the middle temporal (MT) area of male macaques combine eye velocity and retinal velocity, to test whether this interaction is more consistent with a partial coordinate transformation (for computing head-centered object motion) or a multiplicative gain interaction (for computing depth from motion parallax). We find that some MT neurons show clear signatures of a partial coordinate transformation for computing head-centered velocity. Even a small shift toward head-centered velocity tuning can account for the observed depth-sign selectivity of MT neurons, including a strong dependence on speed preference that was previously unexplained. A formal model comparison reveals that the data from many MT neurons are equally well explained by a multiplicative gain interaction or a partial transformation toward head-centered tuning, although some responses can only be well fit by the coordinate transform model. Our findings shed new light on the neural computations performed in area MT, and raise the possibility that depth-sign selectivity emerges from a partial coordinate transformation toward representing head-centered velocity.

SIGNIFICANCE STATEMENT
Eye velocity signals modulate the responses of neurons in the middle temporal (MT) area to retinal image motion. Two different types of interactions between retinal and eye velocities have previously been considered: a vector addition computation for computing head-centered velocity, and a multiplicative gain interaction for computing depth from motion parallax. Whereas previous work favored a multiplicative gain interaction in MT, we demonstrate that some MT neurons show clear evidence of a partial shift toward coding head-centered velocity. Moreover, we demonstrate that even a small shift toward head coordinates is sufficient to account for the depth-sign selectivity observed previously in area MT, thus raising the possibility that a partial coordinate transformation may also provide a mechanism for computing depth from motion parallax.


Subjects
Neurological Models, Motion Perception/physiology, Neurons/physiology, Temporal Lobe/physiology, Animals, Macaca mulatta, Male
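The two candidate interactions contrasted in this abstract can be sketched in a few lines. In this toy model (tuning parameters and gain values are invented, not fit to MT data), a partial coordinate transform shifts the peak of a speed tuning curve by a fraction of eye velocity, whereas a multiplicative gain interaction rescales the response without moving the peak:

```python
import numpy as np

def tuning(v, pref=5.0, width=2.0, gain=30.0):
    # Gaussian speed tuning of a model MT-like neuron (illustrative values)
    return gain * np.exp(-0.5 * ((v - pref) / width) ** 2)

v_retina = np.linspace(-5, 15, 201)

def coord_transform(v_retina, v_eye, shift=1.0):
    # Tuning over v_retina + shift * v_eye; shift = 1 is fully head-centered,
    # 0 < shift < 1 is a partial coordinate transformation
    return tuning(v_retina + shift * v_eye)

def mult_gain(v_retina, v_eye, g=0.1):
    # Eye velocity scales response amplitude but never shifts the peak
    return (1 + g * v_eye) * tuning(v_retina)

for v_eye in (-2.0, 0.0, 2.0):
    peak_ct = v_retina[np.argmax(coord_transform(v_retina, v_eye))]
    peak_mg = v_retina[np.argmax(mult_gain(v_retina, v_eye))]
    print(f"v_eye={v_eye:+.0f}: transform peak {peak_ct:.1f}, gain peak {peak_mg:.1f}")
```

With a full transform the preferred retinal speed moves to 5 - v_eye, while the gain model's peak stays at 5 for every eye velocity; intermediate `shift` values produce the partial shifts described above.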
10.
J Neurosci ; 41(49): 10108-10119, 2021 12 08.
Article in English | MEDLINE | ID: mdl-34716232

ABSTRACT

Multisensory plasticity enables our senses to dynamically adapt to each other and the external environment, a fundamental operation that our brain performs continuously. We searched for neural correlates of adult multisensory plasticity in the dorsal medial superior temporal area (MSTd) and the ventral intraparietal area (VIP) in 2 male rhesus macaques using a paradigm of supervised calibration. We report little plasticity in neural responses in the relatively low-level multisensory cortical area MSTd. In contrast, neural correlates of plasticity are found in higher-level multisensory VIP, an area with strong decision-related activity. Accordingly, we observed systematic shifts of VIP tuning curves, which were reflected in the choice-related component of the population response. This is the first demonstration of neuronal calibration, together with behavioral calibration, in single sessions. These results lay the foundation for understanding multisensory neural plasticity, applicable broadly to maintaining accuracy for sensorimotor tasks.

SIGNIFICANCE STATEMENT
Multisensory plasticity is a fundamental and continual function of the brain that enables our senses to adapt dynamically to each other and to the external environment. Yet, very little is known about the neuronal mechanisms of multisensory plasticity. In this study, we searched for neural correlates of adult multisensory plasticity in the dorsal medial superior temporal area (MSTd) and the ventral intraparietal area (VIP) using a paradigm of supervised calibration. We found little plasticity in neural responses in the relatively low-level multisensory cortical area MSTd. By contrast, neural correlates of plasticity were found in VIP, a higher-level multisensory area with strong decision-related activity. This is the first demonstration of neuronal calibration, together with behavioral calibration, in single sessions.


Subjects
Neuronal Plasticity/physiology, Parietal Lobe/physiology, Temporal Lobe/physiology, Animals, Macaca mulatta, Male
11.
J Neurosci ; 41(14): 3254-3265, 2021 04 07.
Article in English | MEDLINE | ID: mdl-33622780

ABSTRACT

Perceptual decision-making is increasingly being understood to involve an interaction between bottom-up sensory-driven signals and top-down choice-driven signals, but how these signals interact to mediate perception is not well understood. The parieto-insular vestibular cortex (PIVC) is an area with prominent vestibular responsiveness, and previous work has shown that inactivating PIVC impairs vestibular heading judgments. To investigate the nature of PIVC's contribution to heading perception, we recorded extracellularly from PIVC neurons in two male rhesus macaques during a heading discrimination task, and compared findings with data from previous studies of dorsal medial superior temporal (MSTd) and ventral intraparietal (VIP) areas using identical stimuli. By computing partial correlations between neural responses, heading, and choice, we find that PIVC activity reflects a dynamically changing combination of sensory and choice signals. In addition, the sensory and choice signals are more balanced in PIVC, in contrast to the sensory dominance in MSTd and choice dominance in VIP. Interestingly, heading and choice signals in PIVC are negatively correlated during the middle portion of the stimulus epoch, reflecting a mismatch in the polarity of heading and choice signals. We anticipate that these results will help unravel the mechanisms of interaction between bottom-up sensory signals and top-down choice signals in perceptual decision-making, leading to more comprehensive models of self-motion perception.

SIGNIFICANCE STATEMENT
Vestibular information is important for our perception of self-motion, and various cortical regions in primates show vestibular heading selectivity. Inactivation of the macaque vestibular cortex substantially impairs the precision of vestibular heading discrimination, more so than inactivation of other multisensory areas. Here, we record for the first time from the vestibular cortex while monkeys perform a forced-choice heading discrimination task, and we compare results with data collected previously from other multisensory cortical areas. We find that vestibular cortex activity reflects a dynamically changing combination of sensory and choice signals, with both similarities and notable differences with other multisensory areas.


Subjects
Choice Behavior/physiology, Head Movements/physiology, Motion Perception/physiology, Parietal Lobe/physiology, Somatosensory Cortex/physiology, Vestibular Labyrinth/physiology, Animals, Cerebral Cortex/diagnostic imaging, Cerebral Cortex/physiology, Discrimination Learning/physiology, Macaca mulatta, Magnetic Resonance Imaging/methods, Male, Parietal Lobe/diagnostic imaging, Photic Stimulation/methods, Somatosensory Cortex/diagnostic imaging, Vestibular Labyrinth/diagnostic imaging
12.
J Vis ; 20(10): 8, 2020 10 01.
Article in English | MEDLINE | ID: mdl-33016983

ABSTRACT

During self-motion, an independently moving object generates retinal motion that is the vector sum of its world-relative motion and the optic flow caused by the observer's self-motion. A hypothesized mechanism for the computation of an object's world-relative motion is flow parsing, in which the optic flow field due to self-motion is globally subtracted from the retinal flow field. This subtraction generates a bias in perceived object direction (in retinal coordinates) away from the optic flow vector at the object's location. Despite psychophysical evidence for flow parsing in humans, the neural mechanisms underlying the process are unknown. To build the framework for investigation of the neural basis of flow parsing, we trained macaque monkeys to discriminate the direction of a moving object in the presence of optic flow simulating self-motion. Like humans, monkeys showed biases in object direction perception consistent with subtraction of background optic flow attributable to self-motion. The size of perceptual biases generally depended on the magnitude of the expected optic flow vector at the location of the object, which was contingent on object position and self-motion velocity. There was a modest effect of an object's depth on flow-parsing biases, which reached significance in only one of two subjects. Adding vestibular self-motion signals to optic flow facilitated flow parsing, increasing biases in direction perception. Our findings indicate that monkeys exhibit perceptual hallmarks of flow parsing, setting the stage for the examination of the neural mechanisms underlying this phenomenon.


Subjects
Motion Perception, Optic Flow/physiology, Animals, Haplorhini, Humans, Macaca mulatta, Male, Retina/physiology
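Flow parsing, as described in this abstract, is a vector subtraction. The sketch below is a toy illustration (the vectors and the partial-subtraction gain are invented): the optic-flow vector expected from self-motion at the object's location is subtracted from the object's retinal motion, and a gain below one leaves the kind of residual bias the experiments measure:

```python
import numpy as np

def flow_parse(v_retinal_obj, flow_at_obj, gain=1.0):
    """Estimate scene-relative object motion by subtracting a gain-scaled
    copy of the self-motion flow vector at the object's location."""
    return np.asarray(v_retinal_obj) - gain * np.asarray(flow_at_obj)

# Object moving straight up at 2 deg/s while rightward self-motion produces
# leftward background flow of 3 deg/s at the object's location
v_retinal = np.array([-3.0, 2.0])   # vector sum seen on the retina
flow = np.array([-3.0, 0.0])

print(flow_parse(v_retinal, flow))        # full parsing recovers upward motion
print(flow_parse(v_retinal, flow, 0.7))   # incomplete parsing leaves a bias
```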
13.
eNeuro ; 7(6), 2020.
Article in English | MEDLINE | ID: mdl-33127626

ABSTRACT

When the eyes rotate during translational self-motion, the focus of expansion (FOE) in optic flow no longer indicates heading, yet heading judgements are largely unbiased. Much emphasis has been placed on the role of extraretinal signals in compensating for the visual consequences of eye rotation. However, recent studies also support a purely visual mechanism of rotation compensation in heading-selective neurons. Computational theories support a visual compensatory strategy but require different visual depth cues. We examined the rotation tolerance of heading tuning in macaque area MSTd using two different virtual environments, a frontoparallel (2D) wall and a 3D cloud of random dots. Both environments contained rotational optic flow cues (i.e., dynamic perspective), but only the 3D cloud stimulus contained local motion parallax cues, which are required by some models. The 3D cloud environment did not enhance the rotation tolerance of heading tuning for individual MSTd neurons, nor the accuracy of heading estimates decoded from population activity, suggesting a key role for dynamic perspective cues. We also added vestibular translation signals to optic flow, to test whether rotation tolerance is enhanced by non-visual cues to heading. We found no benefit of vestibular signals overall, but a modest effect for some neurons with significant vestibular heading tuning. We also find that neurons with more rotation tolerant heading tuning typically are less selective to pure visual rotation cues. Together, our findings help to clarify the types of information that are used to construct heading representations that are tolerant to eye rotations.


Subjects
Cues (Psychology), Motion Perception, Animals, Macaca mulatta, Photic Stimulation, Temporal Lobe
14.
Curr Opin Physiol ; 16: 8-13, 2020 Aug.
Article in English | MEDLINE | ID: mdl-32968701

ABSTRACT

Neurophysiological studies of multisensory processing have largely focused on how the brain integrates information from different sensory modalities to form a coherent percept. However, in the natural environment, an important extra step is needed: the brain faces the problem of causal inference, which involves determining whether different sources of sensory information arise from the same environmental cause, such that integrating them is advantageous. Behavioral and computational studies have provided a strong foundation for studying causal inference, but studies of its neural basis have only recently been undertaken. This review focuses on recent advances regarding how the brain infers the causes of sensory inputs and uses this information to make robust perceptual estimates.

15.
Nat Neurosci ; 23(8): 1004-1015, 2020 08.
Article in English | MEDLINE | ID: mdl-32541964

ABSTRACT

Neurons represent spatial information in diverse reference frames, but it remains unclear whether neural reference frames change with task demands and whether these changes can account for behavior. In this study, we examined how neurons represent the direction of a moving object during self-motion, while monkeys switched, from trial to trial, between reporting object direction in head- and world-centered reference frames. Self-motion information is needed to compute object motion in world coordinates but should be ignored when judging object motion in head coordinates. Neural responses in the ventral intraparietal area are modulated by the task reference frame, such that population activity represents object direction in either reference frame. In contrast, responses in the lateral portion of the medial superior temporal area primarily represent object motion in head coordinates. Our findings demonstrate a neural representation of object motion that changes with task requirements.


Subjects
Action Potentials/physiology, Motion Perception/physiology, Neurons/physiology, Parietal Lobe/physiology, Animals, Macaca mulatta, Male, Photic Stimulation, Spatial Perception/physiology
16.
Neuron ; 106(4): 662-674.e5, 2020 05 20.
Article in English | MEDLINE | ID: mdl-32171388

ABSTRACT

To take the best actions, we often need to maintain and update beliefs about variables that cannot be directly observed. To understand the principles underlying such belief updates, we need tools to uncover subjects' belief dynamics from natural behavior. We tested whether eye movements could be used to infer subjects' beliefs about latent variables using a naturalistic navigation task. Humans and monkeys navigated to a remembered goal location in a virtual environment that provided optic flow but lacked explicit position cues. We observed eye movements that appeared to continuously track the goal location even when no visible target was present there. Accurate goal tracking was associated with improved task performance, and inhibiting eye movements in humans impaired navigation precision. These results suggest that gaze dynamics play a key role in action selection during challenging visuomotor behaviors and may possibly serve as a window into the subject's dynamically evolving internal beliefs.


Subjects
Decision Making/physiology, Ocular Fixation/physiology, Neurological Models, Spatial Navigation/physiology, Adolescent, Adult, Animals, Female, Humans, Macaca mulatta, Male, Young Adult
17.
Cereb Cortex ; 30(4): 2658-2672, 2020 04 14.
Article in English | MEDLINE | ID: mdl-31828299

ABSTRACT

Visual motion processing is a well-established model system for studying neural population codes in primates. The common marmoset, a small New World primate, offers unparalleled opportunities to probe these population codes in key motion processing areas, such as cortical areas MT and MST, because these areas are accessible for imaging and recording at the cortical surface. However, little is currently known about the perceptual abilities of the marmoset. Here, we introduce a paradigm for studying motion perception in the marmoset and compare their psychophysical performance with that of human observers. We trained two marmosets to perform a motion estimation task in which they provided an analog report of their perceived direction of motion with an eye movement to a ring that surrounded the motion stimulus. Marmosets and humans exhibited similar trade-offs in speed versus accuracy: errors were larger and reaction times were longer as the strength of the motion signal was reduced. Reverse correlation on the temporal fluctuations in motion direction revealed that both species exhibited short integration windows; however, marmosets had substantially less nondecision time than humans. Our results provide the first quantification of motion perception in the marmoset and demonstrate several advantages to using analog estimation tasks.


Subjects
Eye Movements/physiology , Motion Perception/physiology , Photic Stimulation/methods , Reaction Time/physiology , Visual Cortex/physiology , Adult , Animals , Callithrix , Female , Humans , Male , Middle Aged , Species Specificity , Young Adult
18.
J Neurosci ; 40(5): 1066-1083, 2020 01 29.
Article in English | MEDLINE | ID: mdl-31754013

ABSTRACT

Identifying the features of population responses that are relevant to the amount of information encoded by neuronal populations is a crucial step toward understanding population coding. Statistical features, such as tuning properties, individual and shared response variability, and global activity modulations, could all affect the amount of information encoded and modulate behavioral performance. We show that two features in particular affect information: the modulation of population responses across conditions (population signal) and the inverse population covariability along the modulation axis (projected precision). We demonstrate that fluctuations of these two quantities are correlated with fluctuations of behavioral performance in various tasks and brain regions, consistently across four monkeys (one female and one male Macaca mulatta, and two male Macaca fascicularis). In contrast, fluctuations in mean correlations among neurons and global activity have negligible or inconsistent effects on the amount of information encoded and on behavioral performance. We also show that differential correlations reduce the amount of information encoded in finite populations by reducing projected precision. Our results are consistent with predictions of a model that optimally decodes population responses to produce behavior.

SIGNIFICANCE STATEMENT The past two to three decades of research have seen heated debate about which features of population tuning and trial-by-trial variability influence the information carried by a population of neurons, with some camps arguing, for instance, that mean pairwise correlations or global fluctuations are important while others report the opposite. In this study, we identify the features of neural population responses that most strongly determine the amount of encoded information and behavioral performance by combining analytic calculations with a novel nonparametric method that allows us to isolate the effects of different statistical features. We tested our hypothesis on four macaques, three decision-making tasks, and two brain areas. The predictions of our theory were in agreement with the experimental data.
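The two quantities named above can be illustrated with the standard linear Fisher information formula, I = f'ᵀ Σ⁻¹ f', factored into a population-signal term and a projected-precision term. This is a minimal sketch of the textbook quantity, not the paper's estimator, and the variable names are illustrative.

```python
import numpy as np

def fisher_decomposition(fprime, cov):
    """Linear Fisher information I = f'T Sigma^-1 f', factored into
    the population signal (squared norm of the tuning-curve derivative
    f') and the projected precision (inverse covariance projected onto
    the direction of f'), so that I = pop_signal * proj_precision."""
    sol = np.linalg.solve(cov, fprime)   # Sigma^-1 f'
    info = fprime @ sol                  # f'T Sigma^-1 f'
    pop_signal = fprime @ fprime         # |f'|^2
    proj_precision = info / pop_signal
    return info, pop_signal, proj_precision
```

With an identity covariance the projected precision is 1 and the information equals the population signal; correlated noise along f' (differential correlations) lowers the projected precision and hence the information.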


Subjects
Neural Networks, Computer , Neurons/physiology , Prefrontal Cortex/physiology , Psychomotor Performance/physiology , Temporal Lobe/physiology , Animals , Attention/physiology , Behavior, Animal , Discriminant Analysis , Female , Macaca fascicularis , Macaca mulatta , Male , Models, Neurological , Motion Perception/physiology , Visual Perception/physiology
19.
Proc Natl Acad Sci U S A ; 116(18): 9060-9065, 2019 04 30.
Article in English | MEDLINE | ID: mdl-30996126

ABSTRACT

The brain infers our spatial orientation and properties of the world from ambiguous and noisy sensory cues. Judging self-motion (heading) in the presence of independently moving objects poses a challenging inference problem because the image motion of an object could be attributed to movement of the object, self-motion, or some combination of the two. We test whether perception of heading and object motion follows predictions of a normative causal inference framework. In a dual-report task, subjects indicated whether an object appeared stationary or moving in the virtual world, while simultaneously judging their heading. Consistent with causal inference predictions, the proportion of object stationarity reports, as well as the accuracy and precision of heading judgments, depended on the speed of object motion. Critically, biases in perceived heading declined when the object was perceived to be moving in the world. Our findings suggest that the brain interprets object motion and self-motion using a causal inference framework.
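A toy version of the causal inference computation is Bayesian model comparison between a "stationary" and a "moving" hypothesis for the object's image motion. Everything below is an illustrative assumption (one-dimensional, Gaussian, fixed priors), not the paper's fitted model.

```python
import numpy as np

def _gauss(x, sd):
    """Zero-mean Gaussian density."""
    return np.exp(-0.5 * (x / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

def p_stationary(obj_motion, pred_from_heading, sigma_obs=1.0,
                 sigma_move=5.0, prior_stat=0.5):
    """Posterior probability that the object is stationary in the world.
    Under 'stationary', the observed object image motion should match the
    motion predicted from self-motion alone (sensory noise sigma_obs);
    under 'moving', extra object velocity (spread sigma_move) broadens
    the predictive distribution."""
    resid = obj_motion - pred_from_heading
    like_stat = _gauss(resid, sigma_obs)
    like_move = _gauss(resid, np.hypot(sigma_obs, sigma_move))
    post = prior_stat * like_stat
    return post / (post + (1.0 - prior_stat) * like_move)
```

The sketch reproduces the qualitative prediction in the abstract: small residual image motion is attributed to self-motion (high stationarity probability), while fast object motion is attributed to the object itself.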


Subjects
Motion Perception/physiology , Space Perception/physiology , Visual Perception/physiology , Adult , Animals , Cues , Female , Healthy Volunteers , Humans , Judgment/physiology , Macaca mulatta , Male , Motion , Movement/physiology , Orientation/physiology , Photic Stimulation/methods
20.
eNeuro ; 6(2)2019.
Article in English | MEDLINE | ID: mdl-30923736

ABSTRACT

Creating three-dimensional (3D) representations of the world from two-dimensional retinal images is fundamental to visually guided behaviors including reaching and grasping. A critical component of this process is determining the 3D orientation of objects. Previous studies have shown that neurons in the caudal intraparietal area (CIP) of the macaque monkey represent 3D planar surface orientation (i.e., slant and tilt). Here we compare the responses of neurons in areas V3A (which is implicated in 3D visual processing and precedes CIP in the visual hierarchy) and CIP to 3D-oriented planar surfaces. We then examine whether activity in these areas correlates with perception during a fine slant discrimination task in which the monkeys report if the top of a surface is slanted toward or away from them. Although we find that V3A and CIP neurons show similar sensitivity to planar surface orientation, significant choice-related activity during the slant discrimination task is rare in V3A but prominent in CIP. These results implicate both V3A and CIP in the representation of 3D surface orientation, and suggest a functional dissociation between the areas based on slant-related choice signals.
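Choice-related activity of the kind reported here is commonly quantified as choice probability: the ROC area separating a neuron's responses grouped by the animal's choice, where 0.5 means no choice-related modulation. This sketch uses the Mann-Whitney rank-sum identity and is a generic illustration of the measure, not taken from the paper.

```python
import numpy as np

def choice_probability(rates_choice_a, rates_choice_b):
    """Area under the ROC curve separating a neuron's firing-rate
    distributions on trials grouped by the animal's choice.  Computed
    via the Mann-Whitney U identity: the fraction of (choice-A,
    choice-B) trial pairs where the choice-A rate is larger, with ties
    counting half."""
    a = np.asarray(rates_choice_a, float)
    b = np.asarray(rates_choice_b, float)
    diff = a[:, None] - b[None, :]
    return (np.sum(diff > 0) + 0.5 * np.sum(diff == 0)) / (a.size * b.size)
```

Values near 1 (or 0) indicate that single-trial responses reliably predict the upcoming choice, the kind of signal the abstract reports as prominent in CIP but rare in V3A.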


Subjects
Parietal Lobe/physiology , Space Perception/physiology , Visual Perception/physiology , Animals , Choice Behavior/physiology , Macaca mulatta , Male , Neurons/physiology , Orientation/physiology